DLCO Descriptor Matching in OpenCV
Part 1: Introduction to the Local Feature Descriptor
In 2014 the Visual Geometry Group (VGG) at Oxford published a paper on learning local feature descriptors by convex optimisation (DLCO). Since OpenCV 3.2 the extra (contrib) modules have shipped an implementation of this paper with API support: it provides DLCO-based descriptor generation, and with the generated descriptors you can implement object recognition through image feature matching. Details of the descriptor learning method are available here:
http://www.robots.ox.ac.uk/~vgg/software/learn_desc/
The page provides the trained descriptor models, the learning data, and the C++ reference implementation for download.
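To use this API you need an OpenCV build with the opencv_contrib modules enabled. Below is a minimal setup sketch of the headers and namespaces the demo code assumes; header names can differ slightly between OpenCV versions, so treat this as an assumption rather than a fixed recipe.
// Assumed setup: OpenCV 3.2+ built together with opencv_contrib
#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp>          // VGG (DLCO) descriptor
#include <opencv2/xfeatures2d/nonfree.hpp>  // SURF detector (non-free component)

using namespace cv;
using namespace cv::xfeatures2d;
using namespace std;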
Part 2: OpenCV Program Demo
The VGG/DLCO descriptor generator in OpenCV supports the following output types:
VGG_120 = 100,
VGG_80 = 101,
VGG_64 = 102,
VGG_48 = 103
By default the output descriptor is a 120-dimensional vector, i.e. VGG_120. Implementing object detection and matching with DLCO in OpenCV roughly involves the following steps:
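If a shorter descriptor is preferred, the output type can be selected when creating the extractor. A small sketch, assuming the first parameter of VGG::create selects the descriptor type and the remaining parameters keep their defaults:
// Sketch: request the 80-dimensional DLCO descriptor instead of the default VGG_120
Ptr<VGG> vgg80 = VGG::create(VGG::VGG_80);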
1. Load the images
Mat box = imread("D:/vcprojects/images/box.png");
Mat box_scene = imread("D:/vcprojects/images/box_in_scene.png");
imshow("box image", box);
imshow("scene image", box_scene);
2. Detect keypoints (SURF)
Ptr<SURF> detector = SURF::create();
int minHessian = 400;
vector<KeyPoint> keypoints_1, keypoints_2;
detector->setHessianThreshold(minHessian);
detector->detect(box, keypoints_1);
detector->detect(box_scene, keypoints_2);
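If you want to check the detector output before computing descriptors, the keypoints can be visualized; an optional sketch:
// Optional: draw the detected SURF keypoints (circle size reflects keypoint scale)
Mat box_keypoints;
drawKeypoints(box, keypoints_1, box_keypoints, Scalar::all(-1), DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
imshow("box keypoints", box_keypoints);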
3. Generate descriptors (DLCO)
Ptr<VGG> vgg_descriptor = VGG::create();
Mat descriptors_1, descriptors_2;
vgg_descriptor->compute(box, keypoints_1, descriptors_1);
vgg_descriptor->compute(box_scene, keypoints_2, descriptors_2);
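The VGG descriptor is a floating-point (CV_32F) vector, which is why the FLANN-based matcher with its default L2 distance can consume it directly in the next step. A quick sanity check of the output shape (120 columns are expected for the default VGG_120):
// Each row is the descriptor of one keypoint; the column count matches the chosen VGG type
printf("descriptors_1: %d x %d\n", descriptors_1.rows, descriptors_1.cols);
printf("descriptors_2: %d x %d\n", descriptors_2.rows, descriptors_2.cols);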
4. Match features and recognize the object
// compute matches
FlannBasedMatcher matcher;
std::vector< DMatch > matches;
matcher.match(descriptors_1, descriptors_2, matches);
double max_dist = 0; double min_dist = 100;
// find the maximum and minimum match distances
for (int i = 0; i < descriptors_1.rows; i++)
{
double dist = matches[i].distance;
if (dist < min_dist) min_dist = dist;
if (dist > max_dist) max_dist = dist;
}
printf("-- Max dist : %f \n", max_dist);
printf("-- Min dist : %f \n", min_dist);
// keep only the good matches: the smaller the distance, the better
std::vector< DMatch > good_matches;
for (int i = 0; i < descriptors_1.rows; i++)
{
if (matches[i].distance <= min(2 * min_dist, 1.5))
{
good_matches.push_back(matches[i]);
}
}
// draw the filtered matches
Mat img_matches;
drawMatches(box, keypoints_1, box_scene, keypoints_2,
good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
//-- Localize the object
std::vector<Point2f> obj;
std::vector<Point2f> scene;
for (size_t i = 0; i < good_matches.size(); i++)
{
//-- Get the keypoints from the good matches
obj.push_back(keypoints_1[good_matches[i].queryIdx].pt);
scene.push_back(keypoints_2[good_matches[i].trainIdx].pt);
}
Mat H = findHomography(obj, scene, RANSAC);
//-- Get the corners from the image_1 ( the object to be "detected" )
std::vector<Point2f> obj_corners(4);
obj_corners[0] = Point2f(0, 0); obj_corners[1] = Point2f(box.cols, 0);
obj_corners[2] = Point2f(box.cols, box.rows); obj_corners[3] = Point2f(0, box.rows);
std::vector<Point2f> scene_corners(4);
perspectiveTransform(obj_corners, scene_corners, H);
//-- Draw lines between the corners (the mapped object in the scene - image_2 )
line(img_matches, scene_corners[0] + Point2f(box.cols, 0), scene_corners[1] + Point2f(box.cols, 0), Scalar(0, 255, 0), 4);
line(img_matches, scene_corners[1] + Point2f(box.cols, 0), scene_corners[2] + Point2f(box.cols, 0), Scalar(0, 255, 0), 4);
line(img_matches, scene_corners[2] + Point2f(box.cols, 0), scene_corners[3] + Point2f(box.cols, 0), Scalar(0, 255, 0), 4);
line(img_matches, scene_corners[3] + Point2f(box.cols, 0), scene_corners[0] + Point2f(box.cols, 0), Scalar(0, 255, 0), 4);
//-- Show detected matches
imshow("Good Matches & Object detection", img_matches);
Original images:
Feature matching result:
Further reading:
Image Processing: Understanding the Homography Matrix